The paper provides a theoretical analysis of the types of distributions that can be efficiently represented by restricted Boltzmann machines (RBMs). The analysis is based on a representation of the unnormalized log probability (free energy) of RBMs as a special form of neural network (NN). The paper relates these RBM networks to more common types of NNs whose properties have been studied in the literature. This approach allows the authors to identify two non-trivial examples of functions that can and cannot be represented efficiently by RBM networks, and hence distributions that can and cannot be modeled efficiently by RBMs. Specifically, they show that RBM networks can efficiently represent any function that depends only on the number of non-zero visible units, such as parity, but that they are unable to represent the only somewhat more difficult inner-product parity function.
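The free-energy view the review refers to can be made concrete: the unnormalized log-probability of a visible vector under an RBM is a linear term plus a sum of softplus units, i.e., exactly a one-hidden-layer network. A minimal sketch of that correspondence (the toy parameters below are arbitrary illustrations, not from the paper):

```python
import math

def rbm_neg_free_energy(v, W, b, c):
    """-F(v) = b.v + sum_j softplus(c_j + W_j . v): the unnormalized
    log-probability of an RBM, which has the form of a one-hidden-layer
    network with softplus activations plus a linear skip term."""
    linear = sum(bi * vi for bi, vi in zip(b, v))
    hidden = sum(
        math.log1p(math.exp(cj + sum(wij * vi for wij, vi in zip(Wj, v))))
        for Wj, cj in zip(W, c)
    )
    return linear + hidden

# Toy 3-visible / 2-hidden RBM with arbitrary parameters.
W = [[0.5, -1.0, 0.2], [1.5, 0.3, -0.7]]
b = [0.1, -0.2, 0.0]
c = [0.0, 0.5]
print(rbm_neg_free_energy([1, 0, 1], W, b, c))
```

Comparing this expression to known NN function classes is what lets the authors transfer representational lower and upper bounds to RBMs.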
Online Structure Learning for Feed-Forward and Recurrent Sum-Product Networks
Kalra, Agastya, Rashwan, Abdullah, Hsu, Wei-Shou, Poupart, Pascal, Doshi, Prashant, Trimponias, Georgios
Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable. Those properties follow from some conditions (i.e., completeness and decomposability) that must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques are typically used in practice. This paper describes a new online structure learning technique for feed-forward and recurrent SPNs. The algorithm is demonstrated on real-world datasets with continuous features for which it is not clear what network architecture might be best, including sequence datasets of varying length.
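The completeness and decomposability conditions mentioned in the abstract are purely structural, so validity can be checked recursively from the scopes of the nodes. A minimal sketch (the node classes and example networks here are illustrative, not the paper's algorithm):

```python
# Hypothetical minimal SPN node classes; a node's scope is the set of
# variables it covers.

class Leaf:
    def __init__(self, var):
        self.scope = {var}          # a leaf models a single variable
        self.children = []

class Sum:
    def __init__(self, children):
        self.children = children
        self.scope = set().union(*(c.scope for c in children))

class Product:
    def __init__(self, children):
        self.children = children
        self.scope = set().union(*(c.scope for c in children))

def is_valid(node):
    """Completeness: all children of a sum have the same scope.
    Decomposability: children of a product have pairwise disjoint scopes."""
    if isinstance(node, Sum):
        if any(c.scope != node.scope for c in node.children):
            return False
    if isinstance(node, Product):
        if sum(len(c.scope) for c in node.children) != len(node.scope):
            return False  # overlapping scopes among product children
    return all(is_valid(c) for c in node.children)

# A valid SPN over x1, x2 ...
valid = Sum([Product([Leaf("x1"), Leaf("x2")]),
             Product([Leaf("x1"), Leaf("x2")])])
# ... and an invalid one: the product's children both cover x1.
invalid = Product([Leaf("x1"), Sum([Leaf("x1"), Leaf("x1")])])

print(is_valid(valid))    # True
print(is_valid(invalid))  # False
```

Because a hand-built network can silently violate either condition, structure learning algorithms such as the one in this paper grow the network in ways that preserve validity by construction.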
Adapting AI Behaviors To Players in Driver San Francisco: Hinted-Execution Behavior Trees
Ocio, Sergio (Ubisoft Entertainment)
The creative nature of games makes trying new ideas desirable, but these changes are sometimes very risky. We need ways to minimize risk while building innovative experiences. Driver San Francisco did this by using Hinted-Execution Behavior Trees; this technique lets developers modify existing AI behaviors dynamically with very low risk, and it was used to adapt Driver's getaway AI to players' skills.
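The core idea of hinted execution, reprioritizing branches of an existing behavior tree at runtime instead of rewriting it, can be sketched roughly as follows. The node names and the hint mechanism below are hypothetical illustrations, not Ubisoft's implementation:

```python
# Hypothetical sketch of a hinted-execution behavior tree: a selector whose
# children can be promoted or suppressed by externally supplied "hints",
# so behavior is biased without restructuring the tree.

SUCCESS, FAILURE = "success", "failure"

class Action:
    def __init__(self, name, result=SUCCESS):
        self.name, self.result = name, result
    def tick(self, hints):
        return self.result

class HintedSelector:
    """Tries children in priority order; hints override that order."""
    def __init__(self, children):
        self.children = children
    def tick(self, hints):
        # hints map child name -> priority boost (higher runs first);
        # a very negative value effectively disables a branch.
        order = sorted(self.children,
                       key=lambda c: hints.get(c.name, 0), reverse=True)
        for child in order:
            if hints.get(child.name, 0) <= -100:   # suppressed branch
                continue
            if child.tick(hints) == SUCCESS:
                return child.name   # report which branch ran (for the demo)
        return FAILURE

getaway = HintedSelector([Action("ram_player"), Action("flee"), Action("hide")])
print(getaway.tick({}))            # default priority order runs "ram_player"
print(getaway.tick({"flee": 10}))  # a hint promotes "flee" to the front
print(getaway.tick({"ram_player": -100, "flee": -100}))  # falls back to "hide"
```

In this sketch, adapting to player skill amounts to changing the hint dictionary, e.g., suppressing aggressive branches for struggling players, while the underlying tree and its tested behaviors stay untouched.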